Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision. In this paper, we use masked autoencoders for this one-sample learning problem. Empirically, our simple method improves generalization on many visual benchmarks for distribution shifts. Theoretically, we characterize the improvement in terms of the bias-variance trade-off.
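As a rough illustration of the test-time training loop described above, the sketch below adapts a per-input copy of a masked autoencoder on the reconstruction loss of a single test image before predicting. The interfaces are assumptions: `mae_model(x)` is taken to return the masked-reconstruction loss and `mae_model.encode(x)` the encoder features, and `classifier_head` is a hypothetical stand-in for the task head.

# Minimal sketch of test-time training with a masked-autoencoder objective.
# `mae_model` and `classifier_head` are hypothetical stand-ins, not the
# paper's exact architecture or hyperparameters.
import copy
import torch

def test_time_adapt(mae_model, classifier_head, image, steps=20, lr=1e-3):
    """Adapt a copy of the encoder to one test image, then predict on it."""
    adapted = copy.deepcopy(mae_model)            # never touch the shared weights
    optimizer = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = adapted(image.unsqueeze(0))        # assumed: returns reconstruction
        optimizer.zero_grad()                     # loss on randomly masked patches
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        features = adapted.encode(image.unsqueeze(0))   # adapted encoder features
        return classifier_head(features)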
How does one adapt a pre-trained visual model to novel downstream tasks without task-specific fine-tuning or any model modification? Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image examples of a new task at test time and a new input image, the goal is to automatically produce the output image, consistent with the given examples. We show that posing this problem as simple image inpainting, literally just filling in a hole in a concatenated visual prompt image, turns out to be surprisingly effective, provided that the inpainting algorithm has been trained on the right data. We train masked autoencoders on a new dataset that we curated: 88k unlabeled figures from academic paper sources on arXiv. We apply visual prompting to these pretrained models and demonstrate results on various downstream image-to-image tasks, including foreground segmentation, single object detection, colorization, edge detection, and more.
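The sketch below shows one way the visual-prompt canvas could be assembled: the example input/output pair and the query are stitched into a 2x2 grid, the bottom-right cell is masked, and an inpainting model fills it in. `inpaint` is a hypothetical stand-in for the MAE-style inpainting model trained on arXiv figures, and the exact grid layout is an assumption.

# Sketch of the visual-prompting setup via image inpainting.
import torch

def visual_prompt(inpaint, example_in, example_out, query_in):
    # each image: a (3, H, W) tensor of the same size
    blank = torch.zeros_like(query_in)
    top = torch.cat([example_in, example_out], dim=2)     # example pair, side by side
    bottom = torch.cat([query_in, blank], dim=2)          # query next to the hole
    canvas = torch.cat([top, bottom], dim=1)              # 2x2 visual prompt image
    h, w = query_in.shape[1:]
    mask = torch.zeros(1, 2 * h, 2 * w)
    mask[:, h:, w:] = 1.0                                 # hole = bottom-right cell
    filled = inpaint(canvas.unsqueeze(0), mask.unsqueeze(0))[0]   # assumed interface
    return filled[:, h:, w:]                              # predicted output image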
We leverage deep features extracted from a pre-trained Vision Transformer (ViT) as dense visual descriptors. We demonstrate that such features, when extracted from a self-supervised ViT model (DINO-ViT), exhibit several striking properties: (i) the features encode powerful, high-level information at high spatial resolution, i.e., they capture semantic object parts at fine spatial granularity, and (ii) the encoded semantic information is shared across related yet different object categories (i.e., super-categories). These properties allow us to design powerful dense ViT descriptors that facilitate a variety of applications, including co-segmentation, part co-segmentation, and correspondences, all achieved by applying lightweight methodologies (e.g., binning / clustering) to deep ViT features. We take these applications further into the realm of inter-class tasks, demonstrating how objects of related categories can be commonly segmented into semantic parts under significant pose and appearance changes. Our methods, evaluated qualitatively and quantitatively, achieve state-of-the-art part co-segmentation results and competitive results with recent supervised methods trained specifically for co-segmentation and correspondences.
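A minimal sketch of the recipe, assuming the public DINO torch.hub entry point: extract per-patch tokens from a self-supervised ViT and cluster them jointly across images for a crude co-segmentation. The paper makes more careful choices (which layer and facet to use, binning, saliency filtering); this only illustrates the overall idea.

# Rough sketch: DINO-ViT patch tokens as dense descriptors + k-means clustering.
import torch
from sklearn.cluster import KMeans

model = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')
model.eval()

def dense_descriptors(images):            # images: (B, 3, H, W), H and W divisible by 8
    with torch.no_grad():
        tokens = model.get_intermediate_layers(images, n=1)[0]   # (B, 1+N, D)
    return tokens[:, 1:, :]               # drop the CLS token: one descriptor per patch

def co_segment(images, n_parts=4):
    feats = dense_descriptors(images)     # (B, N, D)
    B, N, D = feats.shape
    labels = KMeans(n_clusters=n_parts, n_init=10).fit_predict(
        feats.reshape(B * N, D).cpu().numpy())
    return labels.reshape(B, N)           # shared part labels across all input images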
Image classification models can depend on multiple different semantic attributes of an image. Explaining a classifier's decision requires both discovering and visualizing these attributes. Here we achieve this with StylEx, which trains a generative model to specifically explain the multiple attributes underlying classifier decisions. A natural source of such attributes is the StyleSpace of StyleGAN, which is known to generate semantically meaningful dimensions in the image. However, because standard GAN training does not depend on the classifier, it may not represent the attributes that matter for the classifier's decision, and the StyleSpace dimensions may capture irrelevant attributes. To overcome this, we propose a training procedure that incorporates the classifier model in order to learn a classifier-specific StyleSpace. Explanatory attributes are then selected from this space. These can be used to visualize the effect of changing multiple attributes per image, thus providing image-specific explanations. We apply StylEx to multiple domains, including animals, leaves, faces, and retinal images. For these, we show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantics, generates meaningful image-specific explanations, and is human-interpretable as measured in user studies.
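The core attribute-selection idea can be sketched as follows: perturb one style coordinate at a time and rank coordinates by how much they move the classifier's output. `generator(style)` and `classifier(image)` are hypothetical stand-ins for the classifier-specific StyleGAN and the classifier being explained; the paper's actual selection procedure is more involved (per-class and per-image effects), so this is illustrative only.

# Sketch: rank StyleSpace coordinates by their effect on the classifier logits.
import torch

def rank_style_coordinates(generator, classifier, style, delta=2.0):
    base = classifier(generator(style))                    # (1, num_classes) logits
    effects = []
    for i in range(style.shape[1]):                        # one coordinate at a time
        perturbed = style.clone()
        perturbed[0, i] += delta
        shifted = classifier(generator(perturbed))
        effects.append((i, (shifted - base).abs().max().item()))
    return sorted(effects, key=lambda t: t[1], reverse=True)   # most explanatory first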
This work profoundly analyzes discrete self-supervised speech representations through the eyes of Generative Spoken Language Modeling (GSLM). Following the findings of such an analysis, we propose practical improvements to the discrete unit for the GSLM. First, we start comprehending these units by analyzing them in three axes: interpretation, visualization, and resynthesis. Our analysis finds a high correlation between the speech units to phonemes and phoneme families, while their correlation with speaker or gender is weaker. Additionally, we found redundancies in the extracted units and claim that one reason may be the units' context. Following this analysis, we propose a new, unsupervised metric to measure unit redundancies. Finally, we use this metric to develop new methods that improve the robustness of units clustering and show significant improvement considering zero-resource speech metrics such as ABX. Code and analysis tools are available under the following link.
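To make the notion of unit redundancy concrete, here is a runnable toy score that counts how often consecutive frames repeat the same discrete unit. This is not the paper's proposed unsupervised metric, only a hint of what redundancy in a unit sequence can look like.

# Illustrative only: a toy redundancy score over a discrete unit sequence.
from typing import List

def repetition_rate(units: List[int]) -> float:
    """Fraction of frames whose unit equals the previous frame's unit."""
    if len(units) < 2:
        return 0.0
    repeats = sum(1 for prev, cur in zip(units, units[1:]) if prev == cur)
    return repeats / (len(units) - 1)

print(repetition_rate([5, 5, 5, 12, 12, 7, 7, 7, 7]))   # 0.75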
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
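For readers unfamiliar with the underlying mechanism, the sketch below shows generic soft-prompt tuning: a small set of trainable prompt vectors is prepended to the input embeddings of a frozen language model, and only those vectors are updated. Med-PaLM's instruction prompt tuning builds on this idea with clinician-written exemplars; the class below is an assumption-laden illustration, and `frozen_lm` is assumed to accept an `inputs_embeds` argument in the HuggingFace style.

# Generic soft-prompt tuning sketch (illustrative, not the Med-PaLM recipe).
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    def __init__(self, frozen_lm, embed_dim, prompt_len=40):
        super().__init__()
        self.lm = frozen_lm
        for p in self.lm.parameters():
            p.requires_grad = False                        # base model stays frozen
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):                       # (B, T, D) token embeddings
        B = input_embeds.shape[0]
        prompt = self.soft_prompt.unsqueeze(0).expand(B, -1, -1)
        return self.lm(inputs_embeds=torch.cat([prompt, input_embeds], dim=1))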
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality. Project page: https://wnhsu.github.io/ReVISE.
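The unit-bridged, two-step structure described above can be summarized in a few lines. `p_avsr` and `p_tts` are hypothetical stand-ins for the two trained components; the point is only that the interface between them is a sequence of discrete self-supervised units.

# Sketch of the resynthesis pipeline: (noisy audio, lips) -> units -> clean speech.
import torch

def enhance(p_avsr, p_tts, noisy_audio, lip_video):
    with torch.no_grad():
        units = p_avsr(noisy_audio, lip_video)   # (T,) discrete unit ids (assumed)
        clean_wav = p_tts(units)                 # resynthesized clean waveform
    return clean_wav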
Voice Conversion (VC) is the task of making a spoken utterance by one speaker sound as if uttered by a different speaker, while keeping other aspects like content unchanged. Current VC methods focus primarily on spectral features like timbre, while ignoring the unique speaking style of people, which often impacts prosody. In this study, we introduce a method for converting not only the timbre, but also prosodic information (i.e., rhythm and pitch changes) to those of the target speaker. The proposed approach is based on a pretrained, self-supervised model for encoding speech to discrete units, which makes it simple, effective, and easy to optimise. We consider the many-to-many setting with no paired data. We introduce a suite of quantitative and qualitative evaluation metrics for this setup, and empirically demonstrate the proposed approach is significantly superior to the evaluated baselines. Code and samples can be found under https://pages.cs.huji.ac.il/adiyoss-lab/dissc/ .
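A hedged sketch of this style of unit-based conversion: the source utterance is encoded into speaker-independent discrete content units, rhythm and pitch are predicted for the target speaker, and a synthesizer conditioned on the target identity generates the converted speech. `content_encoder`, `prosody_predictor`, and `synthesizer` are hypothetical stand-ins, not the paper's exact components.

# Sketch of discrete-unit voice conversion with prosody conversion.
import torch

def convert_voice(content_encoder, prosody_predictor, synthesizer,
                  source_wav, target_speaker_id):
    with torch.no_grad():
        units = content_encoder(source_wav)                      # discrete content units
        durations, pitch = prosody_predictor(units, target_speaker_id)   # target-style prosody
        return synthesizer(units, durations, pitch, target_speaker_id)   # converted waveform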
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method supports existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
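The per-token acceptance rule at the heart of speculative decoding can be demonstrated on toy categorical distributions: accept the draft token with probability min(1, p/q) and, on rejection, resample from the normalized residual max(p - q, 0), which yields samples distributed exactly according to the target p. This runnable toy omits the parallel verification of several drafted tokens that gives the actual speedup.

# Toy demonstration of the speculative-sampling acceptance rule.
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p, q):
    """One token: propose from the draft distribution q, verify against the target p."""
    x = rng.choice(len(q), p=q)                    # draft proposal
    if rng.random() < min(1.0, p[x] / q[x]):       # accept with probability min(1, p/q)
        return x
    residual = np.maximum(p - q, 0.0)              # rejected: resample from the
    residual /= residual.sum()                     # adjusted residual distribution
    return rng.choice(len(p), p=residual)

p = np.array([0.7, 0.2, 0.1])                      # target model distribution
q = np.array([0.4, 0.4, 0.2])                      # cheaper draft distribution
samples = [speculative_step(p, q) for _ in range(100_000)]
print(np.bincount(samples) / len(samples))         # approximately [0.7, 0.2, 0.1]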
With growing awareness of mental crises and their societal impact, online services providing emergency support have become commonplace in many countries. Computational models trained on discussions between help-seekers and providers can support suicide prevention by identifying at-risk individuals. However, the lack of domain-specific models, especially in low-resource languages, poses a significant challenge for the automatic detection of suicide risk. We propose a model that combines a pre-trained language model (PLM) with a fixed set of manually crafted (and clinically approved) suicide cues, followed by a two-stage fine-tuning process. Our model achieves 0.91 ROC-AUC and an F2 score of 0.55, outperforming an array of strong baselines even early in the conversation, which is crucial for real-time detection in this domain. Moreover, the model performs well across genders and age groups.
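A hedged sketch of the model shape described above: a PLM encoding of the conversation is concatenated with counts of manually crafted cues, and a small head predicts risk. The cue list here is a placeholder rather than the clinically approved set, the two-stage fine-tuning schedule (e.g., head first, then unfreezing the encoder) is not shown, and `plm` is assumed to expose a HuggingFace-style `last_hidden_state`.

# Sketch: PLM representation + handcrafted cue counts -> risk logit.
import torch
import torch.nn as nn

class RiskClassifier(nn.Module):
    def __init__(self, plm, hidden_dim, num_cues):
        super().__init__()
        self.plm = plm                                      # pretrained encoder
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + num_cues, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, input_ids, attention_mask, cue_counts):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                   # [CLS] representation
        return self.head(torch.cat([cls, cue_counts], dim=-1))   # risk logit

def count_cues(text, cues=("hopeless", "goodbye", "can't go on")):   # placeholder cues
    return torch.tensor([float(text.lower().count(c)) for c in cues])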